

Search for: All records

Creators/Authors contains: "Cui, J."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. This paper studies the fundamental problem of multi-layer generator models in learning hierarchical representations. A multi-layer generator model, which consists of multiple layers of latent variables organized in a top-down architecture, tends to learn multiple levels of data abstraction. However, such multi-layer latent variables are typically parameterized as Gaussian, which can be less informative in capturing complex abstractions, resulting in limited success in hierarchical representation learning. On the other hand, the energy-based model (EBM) prior is known to be expressive in capturing data regularities, but it often lacks the hierarchical structure needed to capture different levels of representation. In this paper, we propose a joint latent-space EBM prior model with multi-layer latent variables for effective hierarchical representation learning. We develop a variational joint learning scheme that seamlessly integrates an inference model for efficient inference. Our experiments demonstrate that the proposed joint EBM prior is effective and expressive in capturing hierarchical representations and modeling the data distribution.
    Free, publicly-accessible full text available September 1, 2024
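The variational joint learning scheme in this abstract relies on an inference (encoder) model that replaces costly posterior sampling. A minimal numpy sketch of amortized inference with the reparameterization trick follows; the shapes, weights, and two-headed encoder are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W, b):
    """Toy amortized inference network: maps data x to Gaussian
    posterior parameters (mu, log_var) for one latent layer."""
    h = np.tanh(x @ W + b)
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps, so the sampling
    step stays differentiable with respect to the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.standard_normal((4, 8))        # toy data batch
W = rng.standard_normal((8, 6)) * 0.1  # hypothetical encoder weights
b = np.zeros(6)
mu, log_var = encoder(x, W, b)
z = reparameterize(mu, log_var, rng)
print(z.shape)
```

In a full model, `z` would be fed through the generator and the encoder trained jointly with the EBM prior; here the point is only the shape of the amortized sampling step.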
  2. Objective. Dynamic positron emission tomography (PET) imaging, which can provide information on dynamic changes in physiological metabolism, is now widely used in clinical diagnosis and cancer treatment. However, reconstruction from dynamic data is extremely challenging due to the limited counts received in individual frames, especially in ultra-short frames. Recently, unrolled model-based deep learning methods have shown inspiring results for low-count PET image reconstruction with good interpretability. Nevertheless, existing model-based deep learning methods mainly focus on spatial correlations while ignoring the temporal domain. Approach. In this paper, inspired by the learned primal dual (LPD) algorithm, we propose the spatio-temporal primal dual network (STPDnet) for dynamic low-count PET image reconstruction. Both spatial and temporal correlations are encoded by 3D convolution operators. The physical projection of PET is embedded in the iterative learning process of the network, which provides physical constraints and enhances interpretability. Main results. Experiments on both simulated data and real rat scan data show that the proposed method achieves substantial noise reduction in both the temporal and spatial domains and outperforms maximum likelihood expectation maximization (MLEM), the spatio-temporal kernel method, LPD, and FBPnet. Significance. Experimental results show that STPDnet achieves better reconstruction performance in low-count situations, which makes the proposed method particularly suitable for whole-body dynamic imaging and parametric PET imaging, which require extremely short frames and usually suffer from high levels of noise.
    Free, publicly-accessible full text available October 1, 2024
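For context, the MLEM baseline that unrolled networks such as STPDnet are compared against has a compact closed-form update: x ← x / (Aᵀ1) · Aᵀ(y / (Ax)). A toy numpy sketch on a random system matrix (illustrative only; not the paper's PET geometry or its learned network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system matrix A (detector bins x image voxels) and Poisson data y.
A = rng.random((30, 10))
x_true = rng.random(10) + 0.5
y = rng.poisson(A @ x_true).astype(float)

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization for Poisson data:
    multiplicative update that keeps the image nonnegative.
    Unrolled primal-dual schemes replace parts of such an iterative
    loop with learned layers while keeping the projector A."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x / sens * (A.T @ ratio)
    return x

x_rec = mlem(A, y)
print(x_rec.shape)
```

Embedding the physical projector `A` in the loop is what gives model-based methods (learned or not) their interpretability: every iterate is consistent with the measurement model.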
  3. Free, publicly-accessible full text available September 1, 2024
  4. This paper studies the fundamental problem of learning multi-layer generator models. The multi-layer generator model builds multiple layers of latent variables as a prior model on top of the generator, which benefits learning complex data distributions and hierarchical representations. However, such a prior model usually focuses on modeling inter-layer relations between latent variables by assuming non-informative (conditional) Gaussian distributions, which can be limited in model expressivity. To tackle this issue and learn more expressive prior models, we propose an energy-based model (EBM) on the joint latent space over all layers of latent variables, with the multi-layer generator as its backbone. Such a joint latent-space EBM prior captures the intra-layer contextual relations at each layer through layer-wise energy terms, while latent variables across different layers are jointly corrected. We develop a joint training scheme via maximum likelihood estimation (MLE), which involves Markov chain Monte Carlo (MCMC) sampling for both the prior and posterior distributions of the latent variables at different layers. To ensure efficient inference and learning, we further propose a variational training scheme in which an inference model amortizes the costly posterior MCMC sampling. Our experiments demonstrate that the learned model is expressive in generating high-quality images and capturing hierarchical features for better outlier detection.
    Free, publicly-accessible full text available June 1, 2024
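The MCMC sampling mentioned in this abstract is commonly implemented as short-run Langevin dynamics on the latent space. A minimal sketch with a toy quadratic energy E(z) = z²/2, whose Boltzmann distribution exp(−E) is a standard Gaussian, so the spread of the samples can be sanity-checked against 1 (the energy and step size are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def energy_grad(z):
    """Gradient of the toy energy E(z) = z^2 / 2."""
    return z

def langevin(z0, grad, n_steps=200, step=0.1, rng=rng):
    """Unadjusted Langevin dynamics:
    z <- z - (step/2) * grad(z) + sqrt(step) * noise,
    which approximately samples from exp(-E(z))."""
    z = z0.copy()
    for _ in range(n_steps):
        z = z - 0.5 * step * grad(z) + np.sqrt(step) * rng.standard_normal(z.shape)
    return z

samples = langevin(np.zeros(5000), energy_grad)
print(float(samples.std()))
```

In the actual model the gradient of the learned energy (plus the Gaussian base term) replaces `energy_grad`, and separate chains target the prior and the posterior.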
  5. Free, publicly-accessible full text available May 1, 2024
  6. ABSTRACT

    The Venusian clouds originate from the binary condensation of H2SO4 and H2O. The two components strongly interact with each other via chemistry and cloud formation. Previous works adopted sophisticated microphysical approaches to understand the clouds. Here, we show that the observed vapour and cloud distributions on Venus can be well explained by a semi-analytical model. Our model assumes local thermodynamical equilibrium for water vapour but not for sulphuric acid vapour, and includes the feedback of cloud condensation and acidity on the vapour distributions. The model predicts strong supersaturation of the H2SO4 vapour above 60 km, consistent with our recent cloud condensation model. The semi-analytical model is 100 times faster than the condensation model and 1000 times faster than the microphysical models. This allows us to quickly explore a large parameter space of the sulphuric acid gas-cloud system. We find that the cloud mass loading in the upper clouds responds to the vapour mixing ratios in the lower atmosphere in the opposite way to that in the lower clouds. The transport of water vapour influences the cloud acidity in all cloud layers, while the transport of sulphuric acid vapour only dominates in the lower clouds. This cloud model is fast enough to be coupled with climate models and chemistry models to understand the cloudy atmospheres of Venus and Venus-like extra-solar planets.

     
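The supersaturation criterion at the heart of the model above can be sketched in a few lines: compare the vapour partial pressure to a Clausius-Clapeyron-type saturation pressure, with S = p/p_sat > 1 meaning condensation is favoured. The constants below are illustrative placeholders, not the fitted H2SO4 values used in the paper:

```python
import numpy as np

def p_sat_h2so4(T):
    """Toy saturation vapour pressure (Pa) in Clausius-Clapeyron form
    p_sat = p0 * exp(-B / T). p0 and B are placeholder constants."""
    return 1e13 * np.exp(-10000.0 / T)

def supersaturation(p_vap, T):
    """S = p / p_sat; S > 1 means the vapour is supersaturated."""
    return p_vap / p_sat_h2so4(T)

# Same vapour partial pressure at a warm lower-cloud temperature
# vs. a cold upper-cloud temperature: only the cold case saturates.
S_lower = supersaturation(1e-2, T=350.0)
S_upper = supersaturation(1e-2, T=250.0)
print(S_lower < 1 < S_upper)
```

The steep exponential dependence of p_sat on temperature is why the cold region above ~60 km can be strongly supersaturated while the warmer lower clouds are not.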
  8. Quantum entanglement involving coherent superpositions of macroscopically distinct states is among the most striking features of quantum theory, but its realization is challenging because such states are extremely fragile. Using a programmable quantum simulator based on neutral atom arrays with interactions mediated by Rydberg states, we demonstrate the creation of “Schrödinger cat” states of the Greenberger-Horne-Zeilinger (GHZ) type with up to 20 qubits. Our approach is based on engineering the energy spectrum and using optimal control of the many-body system. We further demonstrate entanglement manipulation by using GHZ states to distribute entanglement to distant sites in the array, establishing important ingredients for quantum information processing and quantum metrology. 
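The GHZ state described above has a simple state-vector form that makes its "all-or-nothing" measurement statistics explicit. A small numpy sketch (the experiment itself engineers Rydberg-atom dynamics rather than writing down a state vector):

```python
import numpy as np

def ghz(n):
    """State vector of the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

psi = ghz(20)
probs = np.abs(psi) ** 2
# Computational-basis measurement yields all-zeros or all-ones, each
# with probability 1/2; every other bit string has zero amplitude.
print(probs[0], probs[-1], probs[1:-1].max())
```

It is exactly this superposition of two macroscopically distinct bit strings that makes GHZ states both metrologically useful and fragile: a single-qubit error already distinguishes the two branches.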